
Like trainer, like bot? Inheritance of bias in algorithmic content moderation


Abstract

The internet has become a central medium through which 'networked publics' express their opinions and engage in debate. Offensive comments and personal attacks can inhibit participation in these spaces. Automated content moderation aims to overcome this problem using machine learning classifiers trained on large corpora of texts manually annotated for offence. While such systems could help encourage more civil debate, they must navigate inherently normatively contestable boundaries, and are subject to the idiosyncratic norms of the human raters who provide the training data. An important objective for platforms implementing such measures might be to ensure that they are not unduly biased towards or against particular norms of offence. This paper provides some exploratory methods by which the normative biases of algorithmic content moderation systems can be measured, by way of a case study using an existing dataset of comments labelled for offence. We train classifiers on comments labelled by different demographic subsets (men and women) to understand how differences in conceptions of offence between these groups might affect the performance of the resulting models on various test sets. We conclude by discussing some of the ethical choices facing the implementers of algorithmic moderation systems, given various desired levels of diversity of viewpoints amongst discussion participants.
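The evaluation design the abstract describes, training a separate classifier on each demographic group's offence labels and then checking how well each model reproduces the other group's judgements, can be sketched with a toy word-count classifier. Everything here (the comments, labels, and the classifier itself) is an illustrative assumption, not the paper's actual data or pipeline:

```python
# Hypothetical sketch of per-group training and cross-evaluation.
# Data and classifier are toy stand-ins, not the paper's dataset or model.
from collections import Counter

def train(comments, labels):
    """Count word occurrences separately for offensive (1) and benign (0) comments."""
    counts = {0: Counter(), 1: Counter()}
    for text, y in zip(comments, labels):
        counts[y].update(text.lower().split())
    return counts

def predict(counts, text):
    """Label 1 if the comment's words occur more often in offensive training text."""
    score = sum(counts[1][w] - counts[0][w] for w in text.lower().split())
    return 1 if score > 0 else 0

def agreement(model, labels):
    """Fraction of comments where the model's prediction matches a group's labels."""
    preds = [predict(model, c) for c in comments]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Toy comments, labelled independently by two rater groups; note they
# disagree on "nobody asked you".
comments = [
    "you are an idiot", "great point thanks", "nobody asked you",
    "well said", "shut up already", "interesting argument",
]
labels_men   = [1, 0, 1, 0, 1, 0]
labels_women = [1, 0, 0, 0, 1, 0]

model_men = train(comments, labels_men)
model_women = train(comments, labels_women)

# Each model fits its own group's norms, but disagrees with the other
# group exactly where the raters themselves disagreed.
print("men-model vs women's labels:", agreement(model_men, labels_women))
print("women-model vs men's labels:", agreement(model_women, labels_men))
```

The cross-evaluation scores quantify the kind of inherited normative bias the paper measures: a model trained on one group's annotations systematically imports that group's conception of offence.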
